
    Large-area visually augmented navigation for autonomous underwater vehicles

    Submitted to the Joint Program in Applied Ocean Science & Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2005.

    This thesis describes a vision-based, large-area, simultaneous localization and mapping (SLAM) algorithm that respects the low-overlap imagery constraints typical of autonomous underwater vehicles (AUVs) while exploiting the inertial sensor information that is routinely available on such platforms. We adopt a systems-level approach exploiting the complementary aspects of inertial sensing and visual perception from a calibrated pose-instrumented platform. This systems-level strategy yields a robust solution to underwater imaging that overcomes many of the unique challenges of a marine environment (e.g., unstructured terrain, low-overlap imagery, moving light source). Our large-area SLAM algorithm recursively incorporates relative-pose constraints using a view-based representation that exploits exact sparsity in the Gaussian canonical form. This sparsity allows for efficient O(n) update complexity in the number of images composing the view-based map by utilizing recent multilevel relaxation techniques. We show that our algorithmic formulation is inherently sparse, unlike other feature-based canonical SLAM algorithms, which impose sparseness via pruning approximations. In particular, we investigate the sparsification methodology employed by sparse extended information filters (SEIFs) and offer new insight as to why, and how, its approximation can lead to inconsistencies in the estimated state errors. Lastly, we present a novel algorithm for efficiently extracting consistent marginal covariances useful for data association from the information matrix.
    In summary, this thesis advances the current state of the art in underwater visual navigation by demonstrating end-to-end automatic processing of the largest visually navigated dataset to date, using data collected from a survey of the RMS Titanic (path length over 3 km and 3100 m² of mapped area). This accomplishment embodies the summed contributions of this thesis to several current SLAM research issues, including scalability, 6-degree-of-freedom motion, unstructured environments, and visual perception.

    This work was funded in part by the CenSSIS ERC of the National Science Foundation under grant EEC-9986821, in part by the Woods Hole Oceanographic Institution through a grant from the Penzance Foundation, and in part by an NDSEG Fellowship awarded through the Department of Defense.
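    The exact-sparsity property the abstract describes can be illustrated with a toy example. Below is a minimal 1D sketch (my own illustration, not the thesis code): each relative-pose constraint between views i and j touches only the (i, i), (j, j), (i, j), and (j, i) entries of the information matrix, so the view-based delayed-state graph stays exactly sparse without any pruning approximation. The scalar weight and the five-pose graph are illustrative assumptions.

```python
import numpy as np

def build_information_matrix(n_poses, constraints, sigma=0.1):
    """Assemble the information (inverse covariance) matrix for a toy 1D
    delayed-state pose graph.  Each relative-pose constraint (i, j) adds
    information only to the (i, i), (j, j), (i, j), (j, i) entries, so the
    matrix is exactly sparse -- no sparsification approximation needed."""
    w = 1.0 / sigma**2                 # scalar measurement information
    L = np.zeros((n_poses, n_poses))
    for i, j in constraints:
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w
    L[0, 0] += w                       # prior on the first pose anchors the graph
    return L

# Sequential odometry links plus one camera-derived loop closure.
odometry = [(k, k + 1) for k in range(4)]
loop_closure = [(0, 4)]
L = build_information_matrix(5, odometry + loop_closure)

# Off-diagonal fill appears only where a constraint actually exists.
nonzero_offdiag = {(i, j) for i in range(5) for j in range(5)
                   if i != j and L[i, j] != 0.0}
```

    Note that the loop closure (0, 4) creates exactly one new symmetric off-diagonal pair; unconstrained pose pairs such as (1, 3) remain exactly zero.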

    Toward AUV Survey Design for Optimal Coverage and Localization Using the Cramer Rao Lower Bound

    This paper discusses an approach to using the Cramér-Rao Lower Bound (CRLB) as a trajectory design tool for autonomous underwater vehicle (AUV) visual navigation. We begin with a discussion of Fisher information as a measure of the lower bound of uncertainty in a simultaneous localization and mapping (SLAM) pose graph. Treating the AUV trajectory as a non-random parameter, the Fisher information is calculated from the CRLB derivation and depends only upon path geometry and sensor noise. The effects of the trajectory design parameters are evaluated by calculating the CRLB for different parameter sets. Next, optimal survey parameters are selected to improve the overall coverage rate while maintaining an acceptable level of localization precision for a fixed number of pose samples. The utility of the CRLB as a design tool in pre-planning an AUV survey is demonstrated using a synthetic data set for a boustrophedon survey. In this demonstration, we compare the CRLB of the improved survey plan with that of an actual previous hull-inspection survey plan of the USS Saratoga. Survey optimality is evaluated by measuring the overall coverage area and the CRLB localization precision for a fixed number of nodes in the graph. We also examine how to exploit prior knowledge of the environmental feature distribution in the survey plan.

    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/86049/1/akim-10.pd
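    The core idea, that the Fisher information of a pose graph depends only on graph topology and sensor noise, can be sketched in a few lines. The following toy comparison (my own 1D illustration, with invented names such as pose_graph_fisher and an arbitrary noise level) shows how a survey plan with extra cross-track links lowers the CRLB relative to a pure dead-reckoning plan with the same number of poses.

```python
import numpy as np

def pose_graph_fisher(n, links, sigma):
    """Fisher information for a toy 1D pose chain with Gaussian relative-pose
    measurements.  It depends only on which poses are linked (path geometry)
    and on the sensor noise sigma, mirroring the paper's CRLB argument."""
    w = 1.0 / sigma**2
    F = np.zeros((n, n))
    F[0, 0] = 1e6                        # first pose treated as known
    for i, j in links:
        F[i, i] += w
        F[j, j] += w
        F[i, j] -= w
        F[j, i] -= w
    return F

def crlb_trace(F):
    """Sum of the per-pose lower-bound variances (smaller = more precise)."""
    return float(np.trace(np.linalg.inv(F)))

n = 6
odometry = [(k, k + 1) for k in range(n - 1)]
sparse_plan = odometry                      # pure dead reckoning
dense_plan = odometry + [(0, 3), (2, 5)]    # plan with cross-track overlap

precision_sparse = crlb_trace(pose_graph_fisher(n, sparse_plan, 0.2))
precision_dense = crlb_trace(pose_graph_fisher(n, dense_plan, 0.2))
```

    Because adding a link can only add (positive semidefinite) information, the dense plan's CRLB trace is strictly smaller; a survey designer can weigh that gain against the coverage lost to overlap.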

    Contact-Aided Invariant Extended Kalman Filtering for Legged Robot State Estimation

    This paper derives a contact-aided inertial navigation observer for a 3D bipedal robot using the theory of invariant observer design. Aided inertial navigation is fundamentally a nonlinear observer design problem; thus, current solutions are based on approximations of the system dynamics, such as an Extended Kalman Filter (EKF), which uses a system's Jacobian linearization along the current best estimate of its trajectory. On the basis of the theory of invariant observer design by Barrau and Bonnabel, and in particular the Invariant EKF (InEKF), we show that the error dynamics of the point contact-inertial system follow a log-linear autonomous differential equation; hence, the observable state variables can be rendered convergent with a domain of attraction that is independent of the system's trajectory. Due to the log-linear form of the error dynamics, it is not necessary to perform a nonlinear observability analysis to show that, when using an Inertial Measurement Unit (IMU) and contact sensors, the absolute position of the robot and a rotation about the gravity vector (yaw) are unobservable. We further augment the state of the developed InEKF with IMU biases, as the online estimation of these parameters has a crucial impact on system performance. We compare the convergence of the proposed system with that of the commonly used quaternion-based EKF observer using a Monte Carlo simulation. In addition, our experimental evaluation using a Cassie-series bipedal robot shows that the contact-aided InEKF provides better performance in comparison with the quaternion-based EKF as a result of exploiting symmetries present in the system dynamics.

    Comment: Published in the proceedings of Robotics: Science and Systems 201
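    The trajectory-independence property that motivates the InEKF can be demonstrated in a stripped-down, noise-free setting. The sketch below (my own illustration, not the paper's filter) propagates a true and an estimated attitude on SO(3) with the same body-frame gyro increments and checks that the right-invariant error R_est · R_trueᵀ never changes, no matter what trajectory is flown; the specific angular-velocity sequence is an arbitrary assumption.

```python
import numpy as np

def so3_exp(w):
    """Rodrigues formula: exponential map from a rotation vector to SO(3)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * K @ K

np.random.seed(0)
R_true = np.eye(3)
R_est = so3_exp(np.array([0.05, -0.02, 0.03]))   # initial attitude error

eta0 = R_est @ R_true.T                          # right-invariant error
for _ in range(50):
    w = np.random.randn(3) * 0.3                 # arbitrary angular velocity
    dR = so3_exp(w * 0.01)                       # body-frame increment
    R_true = R_true @ dR
    R_est = R_est @ dR
eta = R_est @ R_true.T

# The invariant error is exactly preserved: its evolution does not depend
# on the (random) trajectory, which is the symmetry the InEKF exploits.
```

    In the full system the error additionally obeys a log-linear differential equation under noise and bias, but this noise-free invariance is the geometric heart of the construction.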

    An Overview of AUV Algorithms Research and Testbed at the University of Michigan

    This paper provides a general overview of the autonomous underwater vehicle (AUV) research projects being pursued within the Perceptual Robotics Laboratory (PeRL) at the University of Michigan. Founded in 2007, PeRL's research thrust is centered around improving AUV autonomy via algorithmic advancements in sensor-driven perceptual feedback for environmentally-based real-time mapping, navigation, and control. In this paper we discuss our three major research areas: (1) real-time visual simultaneous localization and mapping (SLAM); (2) cooperative multi-vehicle navigation; and (3) perception-driven control. Pursuant to these research objectives, PeRL has acquired and significantly modified two commercial off-the-shelf (COTS) Ocean-Server Technology, Inc. Iver2 AUV platforms to serve as a real-world engineering testbed for algorithm development and validation. Details of the design modifications, and the related research enabled by this integration effort, are discussed herein.

    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/86058/1/reustice-15.pd

    Visually Augmented Navigation in an Unstructured Environment Using a Delayed State History

    This paper describes a framework for sensor fusion of navigation data with camera-based 5-DOF relative-pose measurements for 6-DOF vehicle motion in an unstructured 3D underwater environment. The fundamental goal of this work is to concurrently estimate online the current vehicle position and its past trajectory. This goal is framed within the context of improving mobile robot navigation to support sub-sea science and exploration. The vehicle trajectory is represented by a history of poses in an augmented-state Kalman filter. Camera spatial constraints from overlapping imagery provide partial observations of these poses and are used to enforce consistency and provide a mechanism for loop closure. The multi-sensor camera-plus-navigation framework is shown to have compelling advantages over a camera-only approach by (1) improving the robustness of pairwise image registration, (2) setting the free gauge scale, and (3) allowing for an unconnected camera graph topology. Results are shown for a real-world data set collected by an autonomous underwater vehicle in an unstructured undersea environment.

    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/86055/1/reustice-32.pd
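    The key bookkeeping step in such an augmented-state (delayed-state) filter is cloning the current pose onto the end of the state vector so that later camera constraints can link old and new views. Below is a minimal sketch of that augmentation step, assuming a toy 3-dimensional pose block [x, y, heading]; the function name and covariance values are my own illustrative choices.

```python
import numpy as np

def augment_state(x, P, d=3):
    """Delayed-state augmentation: clone the first d-dimensional pose block
    of the state onto the end of the state vector.  The clone starts
    perfectly correlated with the current pose, so a later camera-derived
    relative-pose measurement against this view corrects the whole map."""
    n = len(x)
    J = np.vstack([np.eye(n), np.eye(d, n)])   # duplicate the first pose block
    return J @ x, J @ P @ J.T

x = np.array([1.0, 2.0, 0.1])                  # current pose [x, y, heading]
P = np.diag([0.04, 0.04, 0.01])                # its covariance
x_aug, P_aug = augment_state(x, P)
```

    After augmentation the clone carries the pose's covariance and full cross-correlation, which is exactly what lets a loop-closing image registration against an old view propagate its correction back through the trajectory history.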

    Towards Bathymetry-Optimized Doppler Re-Navigation for AUVs

    This paper describes a terrain-aided re-navigation algorithm for autonomous underwater vehicles (AUVs) built around optimizing bottom-lock Doppler velocity log (DVL) tracklines relative to a ship-derived bathymetric map. The goal of this work is to improve the precision of AUV DVL-based navigation for near-seafloor science by removing the low-frequency “drift” associated with a dead-reckoned (DR) Doppler navigation methodology. To do this, we use the discrepancy between vehicle-derived and ship-derived acoustic bathymetry as a corrective error measure in a standard nonlinear optimization framework. The advantage of this re-navigation methodology is that it exploits existing ship-derived bathymetric maps to improve vehicle navigation without requiring additional infrastructure. We demonstrate our technique for a recent AUV survey of large-scale gas blowout features located along the U.S. Atlantic margin.Peer Reviewed

    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/86050/1/reustice-30.pd
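    The corrective error measure can be illustrated with a synthetic one-dimensional example. The sketch below (entirely my own construction: the seafloor profile, the constant drift model, and the grid search all stand in for the paper's map, drift dynamics, and nonlinear optimizer) recovers a dead-reckoning offset by minimizing the mismatch between the vehicle's own soundings and the ship-derived map evaluated at corrected positions.

```python
import numpy as np

def seafloor(x):
    """Stand-in ship-derived bathymetric map: depth vs. along-track position."""
    return 50.0 + 3.0 * np.sin(0.2 * x) + 1.5 * np.cos(0.05 * x)

# The vehicle measures the true bathymetry beneath it, but its dead-reckoned
# (DR) positions carry a constant along-track drift.
x_true = np.linspace(0.0, 200.0, 400)
drift = 7.5
x_dr = x_true + drift
z_meas = seafloor(x_true)

def discrepancy(s):
    """Sum-squared mismatch between map depths at drift-corrected positions
    and the vehicle's soundings -- the corrective error measure."""
    return np.sum((seafloor(x_dr - s) - z_meas) ** 2)

# A dense 1D grid search stands in for the paper's nonlinear optimizer.
candidates = np.linspace(-20.0, 20.0, 4001)
s_hat = candidates[np.argmin([discrepancy(s) for s in candidates])]
print(round(s_hat, 2))   # prints 7.5 -- the simulated drift is recovered
```

    In the real system the drift is time-varying and the optimization runs over trackline parameters rather than a single scalar, but the structure of the objective is the same.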

    Large Area 3D Reconstructions from Underwater Surveys

    Robotic underwater vehicles can perform vast optical surveys of the ocean floor. Scientists value these surveys since optical images offer high levels of information and are easily interpreted by humans. Unfortunately, the coverage of a single image is limited by absorption and backscatter, while what is needed is an overall view of the survey area. Recent work on underwater mosaicking assumes planar scenes and is applicable only to situations without much relief. We present a complete and validated system for processing optical images acquired from an underwater robotic vehicle to form a 3D reconstruction of the ocean floor. Our approach is designed for the most general conditions of wide-baseline imagery (low overlap and presence of significant 3D structure) and scales to hundreds of images. We only assume a calibrated camera system and a vehicle with uncertain and possibly drifting pose information (e.g., a compass, depth sensor, and a Doppler velocity log). Our approach is based on a combination of techniques from computer vision, photogrammetry, and robotics. We use a local-to-global approach to structure from motion, aided by the navigation sensors on the vehicle, to generate 3D submaps. These submaps are then placed in a common reference frame that is refined by matching overlapping submaps. The final stage of processing is a bundle adjustment that provides the 3D structure, camera poses, and uncertainty estimates in a consistent reference frame. We present results with ground truth for structure as well as results from an oceanographic survey over a coral reef covering an area of approximately one hundred square meters.

    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/86037/1/opizarro-33.pd
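    The step of placing overlapping submaps in a common reference frame typically reduces to rigid registration of corresponding 3D points. Below is a generic sketch of that building block, least-squares rigid alignment via SVD (Horn's/Kabsch's method); it is a standard technique illustrating the kind of operation involved, not the paper's specific pipeline, and the point sets and transform are synthetic.

```python
import numpy as np

def align_submaps(A, B):
    """Least-squares rigid alignment of two 3xN corresponding point sets:
    returns R, t minimizing ||B - (R @ A + t)||, via the SVD of the
    cross-covariance (Horn/Kabsch).  This is the kind of step used to place
    overlapping 3D submaps in a common frame before bundle adjustment."""
    ca = A.mean(axis=1, keepdims=True)
    cb = B.mean(axis=1, keepdims=True)
    H = (A - ca) @ (B - cb).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

# Synthetic overlap: the same 30 points seen in two submap frames.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 30))
th = np.deg2rad(30.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([[0.5], [-1.0], [2.0]])
B = R_true @ A + t_true

R, t = align_submaps(A, B)
```

    In practice the correspondences come from feature matching between submaps and carry noise and outliers, so such an alignment is usually wrapped in a robust estimator and followed by the global bundle adjustment the abstract describes.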